Authors: Jozef Barunik
Abstract:
Jozef Baruník Associate Professor, Czech Academy of Sciences and Charles University
Jozef Baruník is an Associate Professor at the Institute of Economic Studies, Charles University in Prague. He also serves as the head of the Econometrics department at the Czech Academy of Sciences. In his research, he develops mathematical models for understanding financial problems (such as measuring and managing financial risk), develops statistical methods, and analyzes financial data. He is especially interested in asset pricing, high-frequency data, financial econometrics, machine learning, high-dimensional financial data sets (big data), and frequency-domain econometrics (cyclical properties and behavior of economic variables).
Authors: Bernard Silverman
Abstract: It is only in recent decades that the crime of Modern Slavery/Human Trafficking has become well established in the political and public consciousness. I will discuss the part that statistical approaches have played and the actual and potential contribution of machine learning. One important issue is the right way to deal with estimates that are known to be inaccurate but nevertheless may have beneficial impact. There are some analogies with the public health achievements of Florence Nightingale in the 19th century, which saved many millions of lives even before the advent of modern scientific medicine.
Authors: Caio Almeida, Gustavo Freire, René Garcia and Rodrigo Hizmeri
Abstract: We combine high-frequency stock returns with risk-neutralization to extract the daily common component of tail risks perceived by investors in the cross-section of firms. Our tail risk measure significantly predicts the equity premium and variance risk premium at short horizons. Furthermore, a long-short portfolio built by sorting stocks on their recent exposure to tail risk generates abnormal returns with respect to standard factor models and helps explain the momentum anomaly. Incorporating investors' preferences via risk-neutralization is fundamental to our findings.
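As a rough illustration of the portfolio sort mentioned above (not the authors' exact procedure), the following sketch forms a long-short spread from stocks ranked on an estimated tail-risk exposure. The column names and the direction of the long leg are assumptions.

import numpy as np
import pandas as pd

def long_short_tail_risk(cross_section: pd.DataFrame, n_groups: int = 10) -> float:
    # Rank stocks into deciles by their recent tail-risk beta and return
    # the spread between the extreme deciles. Going long the low-exposure
    # decile is an assumption; the abstract does not state the direction.
    df = cross_section.copy()
    df["decile"] = pd.qcut(df["beta_tail"], n_groups, labels=False)
    low = df.loc[df["decile"] == 0, "ret_fwd"].mean()
    high = df.loc[df["decile"] == n_groups - 1, "ret_fwd"].mean()
    return low - high

# Usage with synthetic data (column names 'beta_tail' and 'ret_fwd' are hypothetical):
rng = np.random.default_rng(0)
demo = pd.DataFrame({"beta_tail": rng.normal(size=100),
                     "ret_fwd": rng.normal(0.01, 0.05, size=100)})
print(long_short_tail_risk(demo))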
Authors: Vincent Gurgul, Duygu Ider and Stefan Lessmann
Abstract: Anticipating price developments in financial markets is a topic of continued interest in forecasting. Fueled by advancements in deep learning and natural language processing (NLP), together with the availability of vast amounts of textual data in the form of news articles, social media postings, etc., an increasing number of studies incorporate text-based predictors in forecasting models. The hypothesis that text-based cues carry predictive information appears especially plausible for cryptocurrency markets, which are the subject of this study. Noting some variation in how prior work employs text features and advanced pre-trained NLP models for price modeling, the first research goal is to survey available options and systematically compare selected approaches. For example, many studies convert raw text into sentiment signals and use these as auxiliary inputs in a financial forecasting model. Since concentrating on sentiment disregards the actual content of the text, one may ask whether the sentiment extraction approach is suitable. In this context, we emphasize weak labeling, an NLP approach to fine-tune text classifiers on an unlabeled target corpus, which holds potential for financial forecasting but has received little attention in the corresponding literature. A second research goal concerns the well-known curse of dimensionality. Combining fundamental, technical, and text-based features in a forecasting model increases dimensionality substantially. This challenge is amplified in cryptocurrency forecasting in that yet another set of potentially relevant features can be extracted from the blockchain. We examine the predictive value of the different feature groups and benchmark alternative forecasting methods for high-dimensional time series data. In this context, we emphasize the temporal fusion transformer, which promises automatic feature selection, and assess its effectiveness vis-à-vis benchmark forecasting models. In sum, our study contributes original empirical evidence to the literature on cryptocurrency price forecasting. We also introduce several advanced deep learning methodologies for text processing and time series forecasting to the financial market modeling community, which are generally applicable in the field and can benefit financial forecasting at large.
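To make the sentiment-extraction route discussed above concrete, here is a minimal sketch of the common approach the abstract questions: raw headlines are scored by a pre-trained financial sentiment model and aggregated into a daily signal that could serve as an auxiliary forecasting input. The model choice (FinBERT) and the aggregation scheme are illustrative assumptions, not the authors' pipeline.

import pandas as pd
from transformers import pipeline

# Pre-trained financial sentiment classifier (an assumed, publicly available model).
clf = pipeline("sentiment-analysis", model="ProsusAI/finbert")

headlines = pd.DataFrame({
    "date": ["2023-01-02", "2023-01-02", "2023-01-03"],
    "text": ["Bitcoin rallies past key resistance level",
             "Exchange hack shakes confidence of crypto investors",
             "Regulators signal softer stance on digital tokens"],
})

# Map each headline to a signed score and average per day.
sign = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}
scores = clf(headlines["text"].tolist())
headlines["sentiment"] = [sign[s["label"]] * s["score"] for s in scores]
daily_signal = headlines.groupby("date")["sentiment"].mean()
print(daily_signal)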
Authors: Qingfu Liu (Fudan University), Zhidan Luo (Fudan University), Yang Song (Washington University) and Chuanjie Wang (Fudan University)
Abstract: The live streaming economy, particularly live streaming e-commerce, is becoming a major component of China's digital economic development. This study investigates the impact of live streaming by fund companies on mutual fund performance, using data from over 10,000 live streams by 99 fund companies on China's leading fund live streaming platform, TianTian Fund APP. We employ machine learning techniques to process the live stream content and construct the 4Vs (Visual, Vocal, Verbal, and Visitor) to measure different aspects of the content. Our analysis reveals that participating in live streaming significantly increases investor attention and fund flow, but negatively affects performance. The 4Vs have heterogeneous effects on investor attention, fund flow, and performance. This research offers empirical evidence for the impact of live streaming on mutual fund performance and contributes to understanding the role of live streaming in the e-commerce industry.
Qingfu Liu Professor, School of Economics, Fudan University
Dr. Qingfu Liu is a professor and Ph.D. advisor at the School of Economics, Fudan University. He holds a Ph.D. in Management Science and Engineering from Southeast University, completed his postdoctoral fellowship in Finance at Fudan University, and was a visiting scholar at Stanford University in the United States. In 2017, he was selected for the Shanghai Pujiang Program. He currently holds multiple roles, including Executive Dean of the Fudan-Stanford Institute for China Financial Technology and Risk Analytics, Academic Vice Dean of the Fudan-Zhongzhi Big Data Institute for Finance and Investment, and Deputy Director of the Shanghai Financial Big Data Joint Innovation Laboratory. He also holds concurrent professorships at the School of Data Science, Fudan University, and the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications. His primary research interests include FinTech, big data finance, green finance, and non-performing asset disposal. Dr. Liu has published over 100 papers in domestic and international journals such as the Journal of Econometrics and the Journal of International Money and Finance, authored three monographs, and led over 20 research projects funded by the National Natural Science Foundation of China, the Ministry of Science and Technology, and the Ministry of Education. His research has been recognized multiple times with Best Paper or first-class awards at conferences, and his academic views and interviews have been published and reprinted by many mainstream media outlets.
Authors: Ying Chen, Hoang Hai Tran, Julian Sester and Yijiong Zhang
Abstract: We study the optimal market making problem in order-driven electronic markets, with a focus on model uncertainty. We consider ambiguity in order arrival intensities and derive a robust strategy that can perform under various market conditions. To achieve this, we introduce a tractable model for the limit order book using Markov Decision Processes and develop robust Reinforcement Learning to solve the complex optimization problem. This approach enables us to accurately represent the order book dynamics with tick structures, as opposed to the usual price dynamics modeled in stochastic approaches.
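A toy sketch of the robustness idea (not the paper's limit order book model): the Bellman backup takes the worst case over an ambiguity set of order-arrival intensities, so the resulting policy has to perform under every plausible intensity. States, rewards, and transitions below are synthetic placeholders.

import numpy as np

n_states, n_actions = 5, 3          # e.g., inventory levels and quote choices (toy)
lambdas = [0.5, 1.0, 1.5]           # ambiguity set of order-arrival intensities
gamma = 0.95

rng = np.random.default_rng(0)
reward = rng.normal(size=(n_states, n_actions))
# One transition kernel per candidate intensity; random placeholders here,
# in practice derived from the order book dynamics under each lambda.
P = {lam: rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
     for lam in lambdas}

V = np.zeros(n_states)
for _ in range(200):
    # Q[k, s, a]: value of action a in state s under the k-th intensity.
    Q = np.stack([reward + gamma * P[lam] @ V for lam in lambdas])
    V = Q.min(axis=0).max(axis=1)   # worst case over intensities, best action
print(V)  # robust state values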
Authors: Maria Grith, Ying Chen and Hannah L. H. Lai
Abstract: Implied volatility (IV) remains a pivotal yet intricate component of financial markets, posing continuous challenges for accurate forecasting. We present a Nonlinear Functional Autoregression framework tailored to the time series of implied volatility surfaces, a dynamic domain indexed by moneyness and maturity. This approach, designed for European put and call options, captures the nonlinear and asymmetric temporal and spatial dependencies intrinsic to IV. Central to our approach is the functional Neural Tangent Kernel (fNTK) estimator. Grounded in the Neural Tangent Kernel parameterization, this estimator offers a modern statistical solution, parsing the intertwined dependencies found within the projections on the covariance operator of the IV surfaces' time series. A methodological contribution lies in establishing the nexus between the fNTK and kernel regression, emphasizing the fNTK's place in contemporary nonparametric statistical modelling. Transitioning from methodology to empirical evidence, we demonstrate the framework's real-world utility via an analysis of the S&P 500 index spanning January 2009 to December 2021. The fNTK stands out in forecasting accuracy, improving RMSE by 4.54% to 39.44% on average for 5- to 20-day-ahead forecasts, and also supports practical trading strategies. When underpinned by our model, straddle trading yields a Sharpe ratio between 3.72 and 5.41, a 107.79% to 528.36% relative improvement in trading results. This combination of methodological rigor and empirical results offers both a statistical advancement and practical guidance for practitioners in the options market.
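For readers unfamiliar with the NTK-kernel regression connection invoked above, the standard identity (written here in generic notation, not the paper's) is that training a sufficiently wide network by gradient descent corresponds to kernel ridge regression with the NTK:

\hat{f}(x) = K_{\Theta}(x, X)\,\bigl(K_{\Theta}(X, X) + \lambda I\bigr)^{-1} Y,

where K_{\Theta} denotes the Neural Tangent Kernel, X and Y are the training inputs and targets, and \lambda \ge 0 is a ridge penalty; the fNTK extends this construction to functional (surface-valued) time series.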
Authors: Ruting Wang
Abstract: TBA
Ruting Wang Postdoc, Business School, Sun Yat-sen University
Authors: Alla Petukhina, Anastasija Tetereva
Abstract: This research extends the machine learning method Random Forest to perform portfolio optimization under the Markowitz framework. We consider a multi-asset portfolio composed of stocks, bonds, credits, high-yields, and commodities, while using macroeconomic and market indicators to identify the conditions on which the allocations depend. The research demonstrates that the proposed methodology outperforms a classical equal-weight portfolio, plain Sharpe ratio maximization, and regime identification via a Hidden Markov Model, even when taking into account trading costs. The results hold when shifting from an expanding window to a rolling window, and are further explored under constraints on turnover. We also use Accumulated Local Effects (ALE) plots to gain insight into the black box that drives the model and to make inferences about how the asset classes perform under different macroeconomic and market conditions.
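A compressed sketch of the kind of pipeline described above, with synthetic data standing in for the macroeconomic/market indicators and asset-class returns: a random forest produces conditional return forecasts, which then enter a plain Markowitz step. This is our illustration, not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                 # macro and market indicators (toy)
R = rng.normal(0.001, 0.02, size=(500, 5))    # returns of 5 asset classes (toy)

# One forest per asset class: conditional expected return given the indicators.
forests = [RandomForestRegressor(n_estimators=200, random_state=0).fit(X, R[:, j])
           for j in range(R.shape[1])]
mu = np.array([f.predict(X[-1:])[0] for f in forests])
Sigma = np.cov(R, rowvar=False)

# Unconstrained mean-variance weights, rescaled to sum to one
# (with real data one would add the turnover and budget constraints).
w = np.linalg.solve(Sigma, mu)
w = w / w.sum()
print(w)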
Authors: Jan Kalina
Abstract: Multilayer perceptrons and radial basis function networks represent established tools for nonlinear regression modeling with numerous applications in various fields. Their standard training is vulnerable to the presence of outliers in the data. We propose several novel (possibly regularized) versions based on robust loss functions inspired by robust linear regression. Robust inter-quantile versions are also proposed. Intensive numerical experiments reveal the robust versions to be meaningful for data contaminated by outliers.
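As a minimal sketch of the idea (showing one robust loss inspired by robust regression; the authors' specific versions, including the inter-quantile ones, are not reproduced here), least-squares training of a small multilayer perceptron is replaced by the Huber loss, which bounds the influence of gross outliers:

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 3)
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(200)
y[:10] += 20.0                        # contaminate the data with gross outliers

model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
loss_fn = nn.HuberLoss(delta=1.0)     # quadratic near zero, linear in the tails
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()
print(loss.item())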
Jan Kalina Researcher, Charles University
Authors: Jan Vecer
Abstract: I present a novel approach to portfolio optimization that completely bypasses utility maximization. The idea is based on a well-known fact from option pricing: prices are likelihood ratios of the two state price densities of the asset and the reference asset. Using this fact, one trivially concludes that the price of the optimal portfolio is simply a likelihood ratio of the physical measure used by the agent and the risk-neutral density. Furthermore, a well-known property of the likelihood ratio is that its expected logarithm is the relative entropy. As a consequence, the expected log returns of the portfolios are maximized for the solution in the form of the likelihood ratio. In other words, the prices are log-utility optimal, but this is a consequence rather than the design. General utility maximization can be viewed as a method for altering the physical measure toward the risk-neutral measure in the relative entropy sense. As general methods of utility maximization often require solving complicated problems based on stochastic optimal control, we contrast these methods with standard approaches used in both frequentist and Bayesian statistics, where the choice of the physical measure is essentially ad hoc. The viability of our approach is supported by the fact that the solution of Merton's portfolio problem can be reconstructed as a trivial consequence. Markowitz's approach to portfolio optimization can be viewed as a second-order approximation of the log utility function. In addition, our approach allows for solving portfolio problems that have not previously been considered, such as optimal portfolio problems in driftless (mean-reverting) markets.
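In generic notation (ours, not the speaker's), and assuming a zero interest rate for simplicity, the central identity sketched above reads:

X_T^{*} = \frac{d\mathbb{P}}{d\mathbb{Q}}, \qquad \mathbb{E}^{\mathbb{P}}\bigl[\log X_T^{*}\bigr] = D_{\mathrm{KL}}\bigl(\mathbb{P} \,\Vert\, \mathbb{Q}\bigr),

where \mathbb{P} is the agent's physical measure, \mathbb{Q} the risk-neutral measure, and X_T^{*} the terminal value of the optimal portfolio: maximizing the expected log return amounts to maximizing the likelihood ratio's expected logarithm, which is exactly the relative entropy between \mathbb{P} and \mathbb{Q}.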
Authors: Zuzana Prášková
Abstract: TBA
Zuzana Prášková Professor, Charles University
Authors: Matus Horvath
Abstract: TBA
Matus Horvath PhD Student, Masaryk University
Authors: Jurgita Cerneviciene and Audrius Kabasinskas
Abstract: Financial crisis recognition and prediction have become a considerable challenge for avoiding the bankruptcy of financial institutions and companies. Early warning methods for financial crises are an excellent tool for financial risk managers to reduce the likelihood of failures. The widespread use of machine learning (ML) in finance has increased the need for Explainable Artificial Intelligence (XAI) to improve interpretability and uncover hidden relationships between variables for financial decision-making while meeting regulatory requirements. Feature selection is crucial to reduce the number of variables in large data federations while preserving important information as much as possible. In this study, we assess the possibility of an early warning technique for financial crisis detection by applying ML techniques to performance and risk measures of financial data from the 2019-2022 period. Experimental findings indicate that some ML methods could be used to predict crises. Finally, we address concerns related to feature selection by analyzing LIME (Local Interpretable Model-agnostic Explanations) values, a technique that has recently attracted increasing attention in XAI, to interpret the results of black-box methods.
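To illustrate the interpretation step, below is a minimal sketch using the lime package on a synthetic crisis classifier; the features, the model, and the data are placeholders, not the study's.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))   # stand-ins for performance and risk measures
y = (X[:, 0] - X[:, 2] + rng.normal(size=300) > 0).astype(int)  # 1 = crisis

clf = GradientBoostingClassifier().fit(X, y)
explainer = LimeTabularExplainer(
    X,
    feature_names=["sharpe", "var", "drawdown", "volatility"],  # hypothetical
    class_names=["no crisis", "crisis"],
    mode="classification",
)

# Local explanation for a single black-box prediction.
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())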
Jurgita Cerneviciene PhD Student, Kaunas University of Technology
Authors: Monika Kaľatová
Abstract:
Monika Kaľatová PhD student, Charles University
Authors: Jana Junova
Abstract:
Jana Junova PhD student, Charles University
Authors: Karel Kozmik
Abstract:
Karel Kozmik PhD student, Charles University
Authors: Petr Vejmelka
Abstract:
Petr Vejmelka PhD student, Charles University
Authors: Monika Matouskova
Abstract: TBA
Monika Matouskova PhD student, Charles University
Authors: Jenher Jeng
Abstract: The formation of knowledge in the human brain can be regarded as a fusion process of information, and this process is fundamentally based on the human ability to associate. For machine learning techniques to mimic how the human brain connects the dots of key information to assemble knowledge modules, the challenges of building a system of artificial general intelligence (AGI) could go well beyond the level of semantic analysis in natural language processing (NLP), e.g., ChatGPT’s large language model (LLM). It is common to find ideas or insights that are extremely difficult to describe in linguistic statements. We will demonstrate a man-machine co-learning process to build a personalized knowledge base (PKB) as an embryo of a digital brain that stores a human’s thoughts and ideas, building a “digital twin” on a three-layer knowledge-graph architecture. This architecture not only allows machine learning to overcome the curse of dimensionality in complex graph computing, but also provides a meta-framework to upgrade LLMs, to develop a new operating system for designing genuinely personalized computers, and even to allow neuroscience researchers to explore the twilight zone of consciousness with the quantitative depth of quantum computing.
Jenher Jeng, National Taipei University of Technology and Taiwan Chamber of Industry and Commerce
Director of the TuringEuler Project, National Taipei University of Technology; Committee of IT Software, Taiwan Chamber of Industry and Commerce